    S-Nav: Semantic-Geometric Planning for Mobile Robots

    Path planning is a basic capability of autonomous mobile robots. Prior approaches to path planning exploit only the geometric information of the environment, without leveraging its inherent semantics. The recently presented S-Graphs construct 3D situational graphs incorporating geometric, semantic, and relational aspects of the elements in the environment to improve overall scene understanding and robot localization. However, these works do not exploit the underlying semantic graphs to improve path planning for mobile robots. To that end, in this paper we present S-Nav, a novel semantic-geometric path planner for mobile robots. It leverages S-Graphs to enable fast and robust hierarchical high-level planning in complex indoor environments. The hierarchical architecture of S-Nav adds a novel semantic search on top of a traditional geometric planner, as well as precise map reconstruction from S-Graphs, to improve planning speed, robustness, and path quality. We demonstrate improved results of S-Nav in a synthetic environment. Comment: 6 pages, 4 figures
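
    As an illustration of the hierarchical idea described in this abstract, the sketch below first runs a semantic search over a room adjacency graph and then refines each room-to-room leg with a geometric planner. The names, graph structure, and `geometric_planner` callable are illustrative assumptions, not the authors' implementation.

    ```python
    # Minimal sketch of hierarchical semantic-geometric planning in the
    # spirit of S-Nav (structure and names are illustrative assumptions).
    import networkx as nx

    def plan_hierarchical(room_graph, start_room, goal_room, geometric_planner):
        """First find a room-level route, then plan geometrically leg by leg.

        room_graph        -- nx.Graph whose nodes are rooms and whose edges
                             connect rooms sharing a traversable doorway
        geometric_planner -- callable (room_a, room_b) -> list of waypoints,
                             e.g. A* on the reconstructed map of those rooms
        """
        # Semantic search: restrict the problem to a corridor of rooms.
        room_route = nx.shortest_path(room_graph, start_room, goal_room)

        # Geometric refinement: plan only inside the rooms on the route.
        path = []
        for room_a, room_b in zip(room_route, room_route[1:]):
            path.extend(geometric_planner(room_a, room_b))
        return path
    ```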

    Attitude estimation using horizon detection in thermal images

    The lack of redundant attitude sensors represents a considerable yet common vulnerability in many low-cost unmanned aerial vehicles. In addition to the use of attitude sensors, exploiting the horizon as a visual reference for attitude control is part of human pilots' training. For this reason, and given the desirable properties of image sensors, considerable research has proposed the use of vision sensors for horizon detection in order to obtain redundant attitude estimation onboard unmanned aerial vehicles. However, atmospheric and illumination conditions may hinder the operability of visible-light image sensors, or even make their use impractical, such as at night. Thermal infrared image sensors have a much wider range of operating conditions, and their price has greatly decreased in recent years, making them an alternative to visible-spectrum sensors in certain operation scenarios. In this paper, two attitude estimation methods are proposed. The first method consists of a novel approach to estimate the line that best fits the horizon in a thermal image. The resulting line is then used to estimate the pitch and roll angles using an infinite horizon line model. The second method uses deep learning to predict attitude angles from the raw pixel intensities of a thermal image. For this, a novel Convolutional Neural Network architecture has been trained using measurements from an inertial navigation system. Both methods are shown to be valid for redundant attitude estimation, providing RMS errors below 1.7° and running at up to 48 Hz, depending on the chosen method, the input image resolution, and the available computational capabilities.
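
    As a hedged sketch of the geometry behind the first method, the snippet below recovers roll and pitch from a fitted horizon line under a simple pinhole, infinite-horizon-line model. The exact parameterization in the paper may differ, and the focal length `f` is an assumed input.

    ```python
    # Sketch of attitude recovery from a horizon line y = m*x + b expressed
    # in image coordinates centered at the principal point, for a pinhole
    # camera with focal length f in pixels (assumed model, not the paper's
    # exact formulation).
    import math

    def attitude_from_horizon(m, b, f):
        """Return (roll, pitch) in radians from a fitted horizon line."""
        roll = math.atan(m)
        # Signed perpendicular distance from the principal point to the line.
        offset = b / math.sqrt(1.0 + m * m)
        pitch = math.atan2(offset, f)
        return roll, pitch
    ```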

    Faster Optimization in S-Graphs Exploiting Hierarchy

    3D scene graphs hierarchically represent the environment, appropriately organizing different environmental entities in various layers. Our previous work on situational graphs extends the concept of a 3D scene graph to SLAM by tightly coupling the robot poses with the scene graph entities, achieving state-of-the-art results. However, one limitation of S-Graphs is scalability in very large environments, as the graph size increases over time and with it the computational complexity. To overcome this limitation, in this work we present initial research on an improved version of S-Graphs that exploits the hierarchy to reduce the graph size by marginalizing redundant robot poses and their connections to the observations of the same structural entities. First, we propose the generation and optimization of room-local graphs encompassing all graph entities within a room-like structure. These room-local graphs are used to compress the S-Graphs by marginalizing the redundant robot keyframes within the given room. We then perform windowed local optimization of the compressed graph at regular time-distance intervals. A global optimization of the compressed graph is performed every time a loop closure is detected. We show accuracy similar to the baseline while achieving a 39.81% reduction in computation time with respect to the baseline. Comment: 4 pages, 3 figures, IROS 2023 Workshop Paper
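
    The room-local compression idea can be sketched as follows, under assumed data structures (this is not the S-Graphs code): all keyframes inside a room but one are marginalized, and their wall-plane observations are folded into the surviving representative.

    ```python
    # Illustrative sketch of room-local keyframe compression. 'graph' is
    # assumed to expose marginalize() (e.g. via Schur complement), and each
    # keyframe is assumed to hold the factors linking it to observed walls.
    def compress_room(graph, room):
        """Marginalize redundant keyframes within one room-like structure."""
        keyframes = sorted(room.keyframes, key=lambda kf: kf.stamp)
        keep = keyframes[-1]                 # representative keyframe
        for kf in keyframes[:-1]:
            for factor in kf.plane_factors:  # observations of the room's walls
                factor.reattach(keep)        # fold the constraint into 'keep'
            graph.marginalize(kf)            # remove the redundant pose
        return keep
    ```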

    A Review of Radio Frequency Based Localization for Aerial and Ground Robots with 5G Future Perspectives

    Efficient localization plays a vital role in many modern applications of Unmanned Ground Vehicles (UGVs) and Unmanned Aerial Vehicles (UAVs), contributing to improved control, safety, power economy, etc. The ubiquitous 5G NR (New Radio) cellular network will provide new opportunities for enhancing the localization of UAVs and UGVs. In this paper, we review radio frequency (RF) based approaches to localization. We review the RF features that can be utilized for localization and investigate the current methods suitable for unmanned vehicles under two general categories: range-based and fingerprinting. The existing state-of-the-art literature on RF-based localization for both UAVs and UGVs is examined, and the envisioned 5G NR for localization enhancement and future research directions are explored.
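
    To make the range-based category concrete, here is a minimal multilateration example: a linear least-squares position fix from known anchor positions (e.g. 5G NR base stations) and measured distances. This is a textbook formulation, not a method taken from the review.

    ```python
    # Range-based localization by linear least squares: linearize
    # ||x - a_i|| = r_i by subtracting the first equation from the rest.
    import numpy as np

    def multilaterate(anchors, ranges):
        """Position fix from >= 3 anchors in 2D (>= 4 in 3D).

        anchors -- (N, d) array of known anchor coordinates
        ranges  -- (N,) array of measured distances to each anchor
        """
        anchors = np.asarray(anchors, float)
        ranges = np.asarray(ranges, float)
        A = 2.0 * (anchors[1:] - anchors[0])
        b = (ranges[0] ** 2 - ranges[1:] ** 2
             + np.sum(anchors[1:] ** 2, axis=1) - np.sum(anchors[0] ** 2))
        return np.linalg.lstsq(A, b, rcond=None)[0]

    # Example: three anchors in a plane, robot at (2, 1).
    anchors = [(0, 0), (10, 0), (0, 10)]
    truth = np.array([2.0, 1.0])
    ranges = [np.linalg.norm(truth - a) for a in anchors]
    print(multilaterate(anchors, ranges))   # ~ [2. 1.]
    ```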

    S-Graphs+: Real-time Localization and Mapping leveraging Hierarchical Representations

    In this paper, we present an evolved version of Situational Graphs, which jointly models in a single optimizable factor graph: a SLAM graph, as a set of robot keyframes with their associated measurements and robot poses; and a 3D scene graph, as a high-level representation of the environment that encodes its different geometric elements with semantic attributes and the relational information between those elements. Our proposed S-Graphs+ is a novel four-layered factor graph that includes: (1) a keyframes layer with robot pose estimates, (2) a walls layer representing wall surfaces, (3) a rooms layer encompassing sets of wall planes, and (4) a floors layer gathering the rooms within a given floor level. The above graph is optimized in real-time to obtain a robust and accurate estimate of the robot's pose and its map, simultaneously constructing and leveraging the high-level information of the environment. To extract such high-level information, we present novel room and floor segmentation algorithms utilizing the mapped wall planes and free-space clusters. We tested S-Graphs+ on multiple datasets, including simulations of distinct indoor environments, real datasets captured at several construction sites and office environments, and a public dataset of indoor office environments. S-Graphs+ outperforms relevant baselines in the majority of the datasets while extending the robot's situational awareness with a four-layered scene model. Moreover, we make the algorithm available as a Docker file. Comment: 8 pages, 7 figures, 3 tables
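
    The four layers can be pictured with the schematic data structures below. These plain dataclasses are illustrative only, standing in for the variables of the optimizable factor graph rather than the actual S-Graphs+ implementation.

    ```python
    # Schematic of the four-layered factor graph described above
    # (illustrative types, not the authors' code).
    from dataclasses import dataclass, field

    @dataclass
    class Keyframe:          # layer 1: robot pose estimates
        pose: tuple          # e.g. (x, y, z, qx, qy, qz, qw)
        stamp: float = 0.0

    @dataclass
    class Wall:              # layer 2: wall surfaces
        plane: tuple         # (nx, ny, nz, d) plane coefficients

    @dataclass
    class Room:              # layer 3: a set of wall planes
        walls: list = field(default_factory=list)

    @dataclass
    class Floor:             # layer 4: rooms grouped by floor level
        level: int = 0
        rooms: list = field(default_factory=list)
    ```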

    Vision-based Situational Graphs Generating Optimizable 3D Scene Representations

    3D scene graphs offer a more efficient representation of the environment by hierarchically organizing diverse semantic entities and the topological relationships among them. Fiducial markers, on the other hand, offer a valuable mechanism for encoding comprehensive information pertaining to environments and the objects within them. In the context of Visual SLAM (VSLAM), especially when the reconstructed maps are enriched with practical semantic information, these markers have the potential to enhance the map by augmenting it with valuable semantic information and fostering meaningful connections among the semantic objects. In this regard, this paper exploits the potential of fiducial markers to endow a VSLAM framework with hierarchical representations, generating optimizable multi-layered vision-based situational graphs. The framework comprises a conventional VSLAM system with low-level feature tracking and mapping capabilities, bolstered by the incorporation of a fiducial marker map. The fiducial markers aid in identifying walls and doors in the environment, subsequently establishing meaningful associations with high-level entities, including corridors and rooms. Experiments are conducted on a real-world dataset collected using various legged robots and benchmarked against a Light Detection And Ranging (LiDAR)-based framework (S-Graphs) as the ground truth. Consequently, our framework not only excels in crafting a richer, multi-layered hierarchical map of the environment but also shows enhanced robot pose accuracy when contrasted with state-of-the-art methodologies. Comment: 7 pages, 6 figures, 2 tables
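
    As a hedged example of the marker side of such a pipeline, the snippet below detects ArUco fiducial markers with OpenCV; the detected ids could then be associated with walls, doors, or rooms in the situational graph. The API shown is for OpenCV >= 4.7 (older versions expose cv2.aruco.detectMarkers directly), and the file name is a placeholder.

    ```python
    # Detect ArUco fiducial markers in a grayscale frame (OpenCV >= 4.7 API).
    import cv2

    dictionary = cv2.aruco.getPredefinedDictionary(cv2.aruco.DICT_4X4_50)
    detector = cv2.aruco.ArucoDetector(dictionary, cv2.aruco.DetectorParameters())

    image = cv2.imread("frame.png", cv2.IMREAD_GRAYSCALE)  # placeholder path
    if image is not None:
        corners, ids, _rejected = detector.detectMarkers(image)
        if ids is not None:
            # Each id can be mapped to a semantic entity (wall, door, room).
            print("detected markers:", ids.ravel().tolist())
    ```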

    From SLAM to Situational Awareness: Challenges and Survey

    The knowledge that an intelligent and autonomous mobile robot has, and is able to acquire, of itself and the environment, namely the situation, limits its reasoning, decision-making, and execution skills when performing complex missions efficiently and safely. Situational awareness is a basic capability of humans that has been deeply studied in fields like Psychology, Military, Aerospace, and Education, but it has barely been considered in robotics, which has focused on ideas such as sensing, perception, sensor fusion, state estimation, localization and mapping, and spatial AI. In our research, we connected the broad multidisciplinary existing knowledge on situational awareness with its counterpart in mobile robotics. In this paper, we survey the state-of-the-art robotics algorithms, analyze the situational awareness aspects they cover, and discuss their missing points. We found that existing robotics algorithms still miss many important aspects of situational awareness. As a consequence, we conclude that these missing features limit the performance of robotic situational awareness, and further research is needed to overcome this challenge. We see this as an opportunity and provide our vision for future research on robotic situational awareness. Comment: 15 pages, 8 figures

    Situational Graphs for Robot Navigation in Structured Indoor Environments

    Mobile robots should be aware of their situation, comprising a deep understanding of their surrounding environment along with the estimation of their own state, to successfully make intelligent decisions and execute tasks autonomously in real environments. 3D scene graphs are an emerging field of research that proposes to represent the environment in a joint model comprising geometric, semantic, and relational/topological dimensions. Although 3D scene graphs have already been combined with SLAM techniques to provide robots with situational understanding, further research is still required to effectively deploy them on board mobile robots. To this end, we present in this paper a novel real-time, online-built Situational Graph (S-Graph), which combines in a single optimizable graph the representation of the environment along the aforementioned three dimensions, together with the robot pose. Our method utilizes odometry readings and planar surfaces extracted from 3D LiDAR scans to construct and optimize in real-time a three-layered S-Graph that includes: (1) a robot tracking layer where the robot poses are registered, (2) a metric-semantic layer with features such as planar walls, and (3) our novel topological layer constraining the planar walls using higher-level features such as corridors and rooms. Our proposal not only demonstrates state-of-the-art results for robot pose estimation, but also contributes a metric-semantic-topological model of the environment. Comment: 8 pages, 6 figures, RAL/IROS 2022
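
    As a rough illustration of how a topological layer can constrain walls, the sketch below computes the residual of a hypothetical 2D room factor: the room center is pulled toward the midpoints implied by two pairs of opposite wall planes. The formulation is an assumption for illustration, not the paper's exact factor.

    ```python
    # Hypothetical 2D room factor: each wall is (n, d) with unit normal n,
    # so points p on the wall satisfy n . p + d = 0.
    import numpy as np

    def room_residual(room_center, wall_x1, wall_x2, wall_y1, wall_y2):
        """Residual pulling the room center to the midpoint of opposing walls."""
        def midpoint(w_a, w_b):
            (n_a, d_a), (n_b, d_b) = w_a, w_b
            # Average of the points on each plane closest to the origin.
            return 0.5 * (-d_a * np.asarray(n_a) - d_b * np.asarray(n_b))
        # x-walls fix the x coordinate, y-walls the y coordinate.
        center = midpoint(wall_x1, wall_x2) + midpoint(wall_y1, wall_y2)
        return np.asarray(room_center, float) - center
    ```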

    VPS-SLAM: Visual Planar Semantic SLAM for Aerial Robotic Systems

    Indoor environments have an abundant presence of high-level semantic information, which can provide robots with a better understanding of the environment and reduce the uncertainty in their pose estimates. Although semantic information has proved to be useful, there are several challenges faced by the research community in accurately perceiving, extracting, and utilizing such semantic information from the environment. In order to address these challenges, in this paper we present a lightweight and real-time visual semantic SLAM framework running on board aerial robotic platforms. This novel method combines low-level visual/visual-inertial odometry (VO/VIO) with geometrical information corresponding to planar surfaces extracted from detected semantic objects. Extracting the planar surfaces from selected semantic objects provides enhanced robustness and makes it possible to precisely and rapidly improve the metric estimates, while generalizing to several object instances irrespective of their shape and size. Our graph-based approach can integrate several state-of-the-art VO/VIO algorithms along with state-of-the-art object detectors in order to estimate the complete 6DoF pose of the robot while simultaneously creating a sparse semantic map of the environment. No prior knowledge of the objects is required, which is a significant advantage over other works. We test our approach on a standard RGB-D dataset, comparing its performance with state-of-the-art SLAM algorithms. We also perform several challenging indoor experiments validating our approach in the presence of distinct environmental conditions, and furthermore test it on board an aerial robot. Video: https://vimeo.com/368217703 Released code: https://bitbucket.org/hridaybavle/semantic_slam.git
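
    The plane-extraction step can be sketched as an ordinary least-squares fit: given 3D points back-projected from the region of a detected object, recover plane coefficients by SVD. This is a generic formulation, not necessarily the paper's exact pipeline.

    ```python
    # Least-squares plane fit through an (N, 3) array of 3D points.
    import numpy as np

    def fit_plane(points):
        """Return a unit normal n and offset d such that n . p + d ~= 0."""
        points = np.asarray(points, float)
        centroid = points.mean(axis=0)
        # The normal is the right singular vector of the centered points
        # associated with the smallest singular value.
        _, _, vt = np.linalg.svd(points - centroid)
        n = vt[-1]
        d = -n @ centroid
        return n, d
    ```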

    Laser-Based Reactive Navigation for Multirotor Aerial Robots using Deep Reinforcement Learning

    Navigation in unknown indoor environments with fast collision-avoidance capabilities is an ongoing research topic. Traditional motion planning algorithms rely on precise maps of the environment, where re-adapting a generated path can be highly demanding in terms of computational cost. In this paper, we present a fast reactive navigation algorithm using Deep Reinforcement Learning applied to multirotor aerial robots. Taking as input the 2D laser range measurements and the relative position of the aerial robot with respect to the desired goal, the proposed algorithm is successfully trained in a Gazebo-based simulation scenario by adopting an artificial potential field formulation. A thorough evaluation of the trained agent has been carried out in both simulated and real indoor scenarios, showing the appropriate reactive navigation behavior of the agent in the presence of static and dynamic obstacles.
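
    A hedged sketch of an artificial-potential-field reward of the kind such training could use is shown below; the coefficients and exact functional form are assumptions, not the paper's published reward.

    ```python
    # Classic artificial-potential-field shaping: an attractive term pulls
    # toward the goal, a repulsive term penalizes proximity to obstacles
    # sensed by the 2D laser (illustrative coefficients).
    def apf_reward(goal_dist, min_laser_dist,
                   k_att=1.0, k_rep=0.5, d_influence=1.0):
        attractive = -k_att * goal_dist
        repulsive = 0.0
        if min_laser_dist < d_influence:
            repulsive = -k_rep * (1.0 / min_laser_dist - 1.0 / d_influence) ** 2
        return attractive + repulsive
    ```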